Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #4630      +/-   ##
==========================================
- Coverage   89.61%   89.57%   -0.04%
==========================================
  Files         444      444
  Lines       21505    21505
==========================================
- Hits        19272    19264       -8
- Misses       2233     2241       +8
```

☔ View full report in Codecov by Sentry.
Should we cycle it, rather than using a static value? How long does it take to get the first response back from Apollo? Maybe that doesn't matter much, but "Thinking" is a) boring and b) not very insightful. So I'm just wondering if there's something cheap we can use instead.
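If we did want to cycle through phrases rather than pick one statically, the core of it could be sketched like this (a hypothetical pool and helper, not code from this PR; in the component the tick would be advanced by a `setInterval` while `isLoading` is true):

```typescript
// Hypothetical pool of loading messages (names assumed, not from the PR).
const LOADING_STATUSES: string[] = [
  "Thinking...",
  "Working on it...",
  "Taking a look...",
];

// Pure helper: returns the status for a given tick, wrapping around the pool.
function nextStatus(tick: number): string {
  return LOADING_STATUSES[tick % LOADING_STATUSES.length];
}

// In a React component this could be driven by an interval, e.g.:
// useEffect(() => {
//   if (!isLoading) return;
//   const id = setInterval(() => setTick(t => t + 1), 3000);
//   return () => clearInterval(id);
// }, [isLoading]);
```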
@josephjclark I think it's more frustrating to wait for a system to connect than to wait for it to do something properly, so I'm not sure about "calling assistant". What if we took the opportunity to reinforce the voice of the assistant here with a more playful pool?
@hanna-paasivirta those are great fun, and in keeping with the OpenFn corporate voice. But I also don't think I want the AI assistant to have that same voice? It's too fun, and I know I'm a grumpy old man but I don't want it to be fun. This also raises a wider point about what we want the assistant's voice to be like, for which we'll have to canvass more opinions. For now (as we just discussed) let's go with a simple pool.
It appears for a bit too long for "reading" to make sense (it makes me wonder why it reads slower than me). I've gone with "Thinking about the question...", "Working on it...", "Processing your request...", "Examining your question...", "Taking a look...", and "Looking into it...". They don't make 100% sense in all situations, but I think they're adequate.
Security Review

See the workflow run for the raw Claude output.
```js
const loadingStatus = useMemo(
  () =>
    LOADING_STATUSES[Math.floor(Math.random() * LOADING_STATUSES.length)],
  [isLoading]
);
```
Hey @hanna-paasivirta this is a very interesting one. It caught my eye, so I went to the React docs for useMemo and saw that the factory "should be a pure function" and that React reserves the right to throw away cached values (useMemo caveats). Math.random() inside the factory breaks that contract, and in Strict Mode dev the factory runs twice per render too, which muddies the intent a bit. It works fine today, but it's leaning on behavior React documents as non-guaranteed. I think a cleaner shape uses the "store information from previous renders" pattern from the useState docs:
```js
const pickStatus = () =>
  LOADING_STATUSES[Math.floor(Math.random() * LOADING_STATUSES.length)];

const [loadingStatus, setLoadingStatus] = useState(pickStatus);
const wasLoading = useRef(isLoading);
if (isLoading && !wasLoading.current) {
  setLoadingStatus(pickStatus());
}
wasLoading.current = isLoading;
```

That should provide the same behavior — a new random phrase each time loading starts, stable within a session — but the randomness lives in useState, where impurity is fine, and the trigger is an explicit false -> true transition.
I don't think this is a blocker, though. I just learnt something and wanted to share, and maybe it could help make the code better.
Yeah I second this - it's a weird usage of useMemo.
@elias-ba what if we did this on sendMessage: when the message is sent, set the streaming status to one of these random messages. No need for transient local state now, and we guarantee a new one is picked each time.
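A minimal sketch of that shape (`sendMessage`, `setStreamingStatus`, and `api.send` are assumed names for illustration, not code from this repo):

```typescript
// Hypothetical status pool; the real phrases are listed elsewhere in this PR.
const LOADING_STATUSES: string[] = [
  "Thinking about the question...",
  "Working on it...",
  "Looking into it...",
];

// Pure picker; taking the random value as a parameter keeps it testable.
function pickStatus(random: number = Math.random()): string {
  return LOADING_STATUSES[Math.floor(random * LOADING_STATUSES.length)];
}

// Sketch of the wiring: pick a fresh phrase at send time, so no
// transient local state or loading-transition tracking is needed.
// const sendMessage = (text: string) => {
//   setStreamingStatus(pickStatus());
//   api.send(text);
// };
```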
Description
This PR changes the initial progress status shown to the user when starting a conversation with the AI assistants from "Generating response..." to "Thinking...". This change is required because we are adding a more detailed stream of status updates from Apollo, where "Generating response" would feel out of place. OpenFn/apollo#452
Validation steps
AI Usage
Please disclose whether you've used AI anywhere in this PR (it's cool, we just want to know!):
You can read more details in our
Responsible AI Policy
Pre-submission checklist
(e.g., /review with Claude Code)
(e.g., :owner, :admin, :editor, :viewer)